Abstract—This article presents a floating-point exponential operator generator targeting recent FPGAs with embedded memories and DSP blocks. A single-precision operator consumes just one DSP block, 18Kbits of dual-port memory, and 392 slices on Virtex-4. For larger precisions, a generic approach based on polynomial approximation is used and proves more resource-efficient than the literature. For instance a double-precision operator consumes 5 BlockRAM and 12 DSP48 blocks on Virtex-5, or 10 M9k and 22 18x18 multipliers on Stratix III. This approach is flexible, scales well beyond double-precision, and enables frequencies close to the FPGA’s nominal frequency. All the proposed architectures are last-bit accurate for all the floating-point ran...
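The operator above relies on the classic range reduction behind hardware exponential units. As a minimal illustrative sketch (an assumption for exposition, not the paper's actual table-and-polynomial architecture), the scheme e^x = 2^k · e^r with k = round(x/ln 2) and a small reduced argument r can be written in software as:

```python
import math

# Illustrative range reduction for exp(x): e^x = 2^k * e^r,
# with k = round(x / ln 2) and r = x - k*ln 2, so |r| <= ln(2)/2.
# This mirrors the structure of hardware exp units, where e^r is then
# evaluated by a low-degree polynomial (here a Taylor polynomial).

LN2 = math.log(2.0)

def poly_exp(r, degree=10):
    """Evaluate the degree-`degree` Taylor polynomial of e^r by Horner's rule."""
    acc = 1.0
    for i in range(degree, 0, -1):
        acc = acc * r / i + 1.0
    return acc

def exp_approx(x):
    k = round(x / LN2)                  # integer power of two
    r = x - k * LN2                     # reduced argument, |r| <= ln(2)/2
    return math.ldexp(poly_exp(r), k)   # 2**k * e^r, without calling pow()
```

Because |r| is bounded by ln(2)/2 ≈ 0.347, a degree-10 polynomial already reaches near-double-precision relative accuracy; a hardware implementation would size the tables and multiplier widths to the target format instead.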
Floating-point operators on FPGAs do not have to be identical to the ones avai...
Due to their potential performance and unmatched flexibility, FPGA-based accelerators are part of mo...
Many computationally intensive scientific applications involve repetitive floating-point operations ...
As FPGAs are increasingly being used for floating-point computing, the feasibility of a library of f...
This article addresses the development of complex, heavily parameterized and flexible operators to b...
The study of specific hardware circuits for the evaluation of floating-point elementary functions w...
The high performance and capacity of current FPGAs make them suitable as acceleration co-processors...
This paper presents FloPoCo, a framework for easily designing custom arithmetic datapaths for FPGAs....
The advent of reconfigurable co-processors based on field-programmable gate arrays has renewed inter...
It has been shown that FPGAs could outperform high-end microprocessors on floating-point computation...
Abstract—With the density of FPGAs steadily increasing, FPGAs have reached the point where they are ...